Record native/DOM method calls in the native tracer
Categories
(DevTools :: Debugger, enhancement)
Tracking
(firefox133 fixed)
| | Tracking | Status |
| --- | --- | --- |
| firefox133 | --- | fixed |
People
(Reporter: ochameau, Assigned: alexical)
Attachments
(3 files, 4 obsolete files)
This is similar to bug 1835089, but based on bug 1911021.
The goal is to extend the JavaScript tracer to record calls made to non-JS methods (DOM and native methods).
This would help track various WebCompat issues that relate to querying userAgent/navigator APIs, as well as make it easy to track usage of particular Web features.
Once we have such a feature, we could also provide a summary highlighting all features used by a page, or by a particular action on a page. Something similar to bug 1908615, but about a few meaningful DOM methods.
Comment 1 • 9 months ago (Assignee)
Comment 2 • 9 months ago (Reporter)
Comment 3 • 9 months ago (Markus Stange [:mstange])
Adding instrumentation to the WebIDL generated code can have huge impacts on the code size. After I added profiler instrumentation to it, I ended up spending about a month trying to shave off instructions one by one in order to limit the code size impact, see bug 1499507. In bug 1499507 comment 2 I wrote that "We have around 9000 WebIDL constructors + methods + getters + setters". This was six years ago and I'm sure that number has only grown.
You may run into similar difficulties in this bug.
Comment 4 • 9 months ago (Assignee)
This allows us to drive the execution tracer with the profiler's backend, and to collect the trace natively, where it will be much faster than in JS. This is not an optimal solution; there is more copying than I would like to get the data across the Gecko/SpiderMonkey boundary, but doing anything more sophisticated at this point feels like a waste of effort given the stage of the project.
The other purpose of this change is to allow the profiler to turn tracing on for all profiled threads. It achieves this via an atomic indicator, which is picked up by each thread and used to begin tracing as soon as that thread starts running JS.
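For illustration, a minimal sketch of the atomic-indicator idea described above, with hypothetical names (`RequestTracingOnAllThreads`, `MaybeBeginTracing`, and `StartNativeTraceForThread` are stand-ins, not the actual patch's entry points):

```cpp
#include <atomic>

struct JSContext;  // opaque in this sketch

// Hypothetical per-thread entry point, standing in for whatever the real
// patch uses to start collecting the trace natively.
void StartNativeTraceForThread(JSContext* cx);

// Set by the profiler when tracing should begin on all profiled threads.
static std::atomic<bool> gTracingRequested{false};

void RequestTracingOnAllThreads() {
  gTracingRequested.store(true, std::memory_order_release);
}

// Called on a JS thread as it is about to start running JS: it picks up
// the indicator and begins tracing locally, so the profiler never has to
// synchronously interrupt other threads.
void MaybeBeginTracing(JSContext* cx) {
  if (gTracingRequested.load(std::memory_order_acquire)) {
    StartNativeTraceForThread(cx);
  }
}
```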
Comment 5 • 9 months ago (Assignee)
Comment 6 • 9 months ago (Reporter)
Note for Alex: these two patches should rather be attached to bug 1911021, so that this bug can stay focused on discussing native method call tracing.
(In reply to Markus Stange [:mstange] from comment #3)
> Adding instrumentation to the WebIDL generated code can have huge impacts on the code size. After I added profiler instrumentation to it, I ended up spending about a month trying to shave off instructions one by one in order to limit the code size impact, see bug 1499507. In bug 1499507 comment 2 I wrote that "We have around 9000 WebIDL constructors + methods + getters + setters". This was six years ago and I'm sure that number has only grown.
> You may run into similar difficulties in this bug.
So, about tracing native methods.
I know it would prevent tracing native method calls in JITed code, and/or force disabling the JIT while tracing, but there is a more conservative way to track native method calls in SpiderMonkey.
It is based on Debugger.onNativeCall, which I tried using in the slow full-JS tracer implementation in bug 1835089 (which I deprioritized because I was struggling to handle the many-Debugger-instances use case).
I would be curious to know if that's something we could consider using?
This native call tracing starts from these two precise call sites in SpiderMonkey:
https://searchfox.org/mozilla-central/search?q=symbol:_ZN2js8DebugAPI12onNativeCallEP9JSContextRKN2JS8CallArgsENS_10CallReasonE&redirect=false
Note that there is value in contributing to this existing hook, as it could also ultimately allow us to implement breakpoints on specific method calls.
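For illustration, here is a hedged sketch of how a tracer could hang off that hook. Only the DebugAPI::onNativeCall symbol comes from the search link above; the `RecordNativeCall` helper is hypothetical. The point is that the interpreter funnels every native invocation through a single choke point, so the tracer can record calls there instead of instrumenting each of the ~9000 WebIDL bindings.

```cpp
#include "js/CallArgs.h"  // JS::CallArgs

// Hypothetical tracer-side helper, invoked from the hook behind
// js::DebugAPI::onNativeCall (the two call sites linked above). Details of
// the trace buffer are elided in this sketch.
void RecordNativeCall(const JS::CallArgs& args) {
  // The callee is the native (e.g. WebIDL-bound DOM) function being invoked.
  JSObject* callee = &args.callee();
  // ... append {callee, timestamp} to the per-thread trace buffer ...
  (void)callee;
}
```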
Having said that, I'm also curious about other ideas to address Markus's concerns.
Comment 7 • 9 months ago (Assignee)
(In reply to Markus Stange [:mstange] from comment #3)
> Adding instrumentation to the WebIDL generated code can have huge impacts on the code size. After I added profiler instrumentation to it, I ended up spending about a month trying to shave off instructions one by one in order to limit the code size impact, see bug 1499507. In bug 1499507 comment 2 I wrote that "We have around 9000 WebIDL constructors + methods + getters + setters". This was six years ago and I'm sure that number has only grown.
> You may run into similar difficulties in this bug.
I intend to just play with this a bit and test solutions out, but I am aware of this. Worst case this is just a Nightly-only thing, but shouldn't it be fine to just MOZ_NEVER_INLINE ProfilingStack::pushLabelFrame and put our extra instrumentation inside there? I don't imagine the perf hit during profiling would be big enough to care about...?
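A self-contained sketch of that idea, with stand-ins for the Gecko macros and a hypothetical tracing flag (the real pushLabelFrame is a ProfilingStack member with more parameters): keeping the function out of line means the extra branch is emitted once, rather than at every inlined profiler-label call site in the WebIDL bindings.

```cpp
#include <atomic>
#include <cstdio>

// Stand-ins for the Gecko macros so this sketch compiles on its own.
#define MOZ_NEVER_INLINE __attribute__((noinline))
#define MOZ_UNLIKELY(x) __builtin_expect(!!(x), 0)

static std::atomic<bool> gTracingActive{false};  // hypothetical flag

// Out-of-line analogue of ProfilingStack::pushLabelFrame with the extra
// instrumentation proposed above. When tracing is off, the added cost is a
// single relaxed load plus a well-predicted branch.
MOZ_NEVER_INLINE void PushLabelFrame(const char* label) {
  // ... existing profiler frame bookkeeping would go here ...
  if (MOZ_UNLIKELY(gTracingActive.load(std::memory_order_relaxed))) {
    std::printf("trace: %s\n", label);  // stand-in for recording the call
  }
}
```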
Comment 8 • 9 months ago (Markus Stange [:mstange])
It's worth a try! But just because we're profiling it doesn't mean we can just make everything slow. There's a benchmark in attachment 8891158 [details] which, on my arm64 machine, shows 4ns with the profiler off and 16ns with the profiler on, today. I would like us to not regress the "profiler on" number too much unless we must.
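For a sense of how such per-call numbers are produced, here is a hedged sketch of a microbenchmark harness. This is a guess at the general shape, not the actual attachment 8891158, and `PushPopLabelFrame` is a stand-in for the real label push/pop pair:

```cpp
#include <chrono>
#include <cstdio>

// Stand-in for the profiler's label push/pop; the empty asm keeps the
// compiler from optimizing the call away entirely.
__attribute__((noinline)) void PushPopLabelFrame(const char* label) {
  asm volatile("" ::"r"(label));
}

int main() {
  constexpr int kIterations = 100'000'000;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIterations; i++) {
    PushPopLabelFrame("bench");
  }
  auto end = std::chrono::steady_clock::now();
  // Report the average per-iteration cost in nanoseconds.
  double ns = std::chrono::duration<double, std::nano>(end - start).count();
  std::printf("%.2f ns per push/pop\n", ns / kIterations);
  return 0;
}
```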
Comment 9 • 9 months ago (Brian Grinstead [:bgrins])
How much bigger are we talking about in terms of binary size, if we didn't add any mitigations?
Comment 10 • 9 months ago
Comment on attachment 9422734 [details]
Bug 1915169 - Support profiler-driven execution tracing r?arai
Revision D221102 was moved to bug 1911021. Setting attachment 9422734 [details] to obsolete.
Comment 11 • 9 months ago
Comment on attachment 9422735 [details]
Bug 1915169 - Add JS Execution Tracing option to the profiler r?aabh
Revision D221103 was moved to bug 1911021. Setting attachment 9422735 [details] to obsolete.
Comment 12 • 9 months ago
Comment on attachment 9422153 [details]
Bug 1915169 - Cover JavaScript tracing via the profiler with a mochitest.
Revision D220849 was moved to bug 1911021. Setting attachment 9422153 [details] to obsolete.
Comment 13 • 9 months ago (Reporter)
(In reply to Brian Grinstead [:bgrins] from comment #9)
> How much bigger are we talking about in terms of binary size, if we didn't add any mitigations?
Markus mentioned WebIDL instrumentation in comment 3, but Alex went straight for adding JS Tracer instrumentation into ProfilingStack::pushLabelFrame, i.e. outside of WebIDL, so as to benefit from the profiler's existing WebIDL instrumentation, in exchange for a possible regression in profiler sampling performance (to be discussed with actual numbers).
So we don't have numbers for instrumenting WebIDL for JS tracing. Unless you tried WebIDL instrumentation locally, :alexical? If you did, would you mind sharing a patch so that we can push it to try and see the impact on the binary?
Otherwise, looking at the 2018 numbers for the profiler's WebIDL instrumentation: if I read bug 1437772 correctly, we were talking about a <=1% regression, 334k on Windows.
Comment 14 • 9 months ago (Reporter)
(In reply to Markus Stange [:mstange] from comment #8)
> It's worth a try! But just because we're profiling it doesn't mean we can just make everything slow. There's a benchmark in attachment 8891158 [details] which, on my arm64 machine, shows 4ns with the profiler off and 16ns with the profiler on, today. I would like us to not regress the "profiler on" number too much unless we must.
I do have a special setup, running Firefox from a VMware VM.
Without the profiler, attachment 8891158 [details] reports 3-4ns, and 90-100ns with the profiler and the current patches attached to bug 1911021 (which currently also trace native calls via the ProfilingStack::pushLabelFrame hook).
Without these patches, and still with the profiler, I'm at 6-7ns.
:alexical, could you pull out the changes made to ProfilingStack::pushLabelFrame into this bug, so that they can be investigated independently of the overall profiler integration?
Comment 15 • 9 months ago (Assignee)
Comment 16 • 9 months ago (Assignee)
(In reply to Alexandre Poirot [:ochameau] from comment #14)
> :alexical, could you pull out the changes made to ProfilingStack::pushLabelFrame into this bug, so that they can be investigated independently of the overall profiler integration?
To be clear though, the changes in bug 1911021's patch are not what I intend to put up for review for this; they just snuck in. If tracing is disabled, we should just be adding another boolean check. I attached a patch which does roughly what I'm hoping to do, although there's some wiggle room. Note that this patch requires the profiler to be running, so it's a bit awkward without bug 1911021, but it demonstrates the overhead. On my machine, I get 3-4ns when not profiling and 5-6ns when profiling. I don't have binary size measurements yet.
Comment 17 • 8 months ago (Reporter)
Comment 18 • 8 months ago (Assignee)
Comment 19 • 7 months ago
Comment 20 • 7 months ago (bugherder)
https://hg.mozilla.org/mozilla-central/rev/36d1bfadf03d
https://hg.mozilla.org/mozilla-central/rev/7e1f69bfa404